perm filename DOYLE.DOC[S80,JMC] blob
sn#517179 filedate 1980-06-19 generic text, type C, neo UTF8
Supplementary proposal
Bob,
Here is the text of a supplementary proposal covering Jon Doyle's work, as
discussed earlier. We have started pushing it through the paper mill, so
that a formal submission will reach you quite soon. If you spot any
problems, please let us know.
Regards,
Les
-----------
1. Modelling Deliberation, Action and Introspection
This is a proposal to extend the work of the Formal Reasoning
Group of the Stanford Artificial Intelligence Laboratory to study
programs modelling deliberation, action and introspection. The work
will be done by Jon Doyle and by John McCarthy. Doyle's work requires
additional DARPA support amounting to $62,055 for the period October
1, 1980 through October 31, 1981, as described in the budgetary section of
this proposal. No additional support is requested for McCarthy.
Advanced intelligent computer programs must reason about the
effects of their potential future actions, and this includes reasoning
about their own ability to solve problems by reason and action. A
promising approach is to regard reasoning itself as a species of
action whose effects can be reasoned about.  Thus the program
must reason about what it would be able to do or would do in
hypothetical future circumstances. Carrying this out accurately and
effectively involves self-observation akin to human introspection.
Self-observation includes making and examining traces of inferences
made so that when a conclusion has to be revised, the reasoning that
led to it can be identified and the assumption or conjectural
conclusion that has to be taken back can be located.
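This trace-keeping can be sketched in a few lines of modern Python.  The
names here (Trace, record, support, and the sample sentences) are
illustrative only and are not part of any proposed system; the sketch
records which premises each inference rested on, so that a revision
knows exactly which assumptions to reexamine.

```python
# A minimal sketch of inference-trace keeping.  All names and sample
# sentences are invented for illustration.

class Trace:
    def __init__(self):
        # conclusion -> list of premises the inference rested on
        self.justifications = {}

    def record(self, conclusion, premises):
        """Note which premises a newly drawn conclusion rests on."""
        self.justifications[conclusion] = list(premises)

    def support(self, conclusion):
        """Return every assumption the conclusion ultimately depends on,
        so a revision knows what may have to be taken back."""
        deps = set()
        for p in self.justifications.get(conclusion, []):
            deps.add(p)
            deps |= self.support(p)  # premises may themselves be conclusions
        return deps

t = Trace()
t.record("route-is-clear", ["no-traffic-report"])
t.record("arrive-on-time", ["route-is-clear", "car-works"])
# revising "arrive-on-time" means reexamining exactly these premises:
assumptions = t.support("arrive-on-time")
```

The sketch assumes the justification structure is acyclic; a full truth
maintenance system must also handle circular and non-monotonic
justifications.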
Much human decision making involves processes that may be
called dialectical argumentation. Reasons for and against a
contemplated course of action are developed and played against one
another. This process, which we believe is also required for advanced
computer intelligence, is quite different from the mathematical
deductions heretofore carried out by computer programs.
In particular, it involves non-monotonic reasoning of the
kinds recently studied by McCarthy (1980), McDermott and Doyle (1980),
Doyle (1979) and Reiter (1980). The identification of non-monotonic
reasoning as a process distinct from logical deduction but just as
formal was accomplished in various formalisms in the above cited
papers. It represents a major discovery by workers in AI. Logicians
and philosophers have from time to time asserted the existence of non-
deductive modes of reasoning but have generally supposed them to be
non-formalizable. Now that AI has established an entry into this
field, logicians and philosophers are also beginning to work on it.
Doyle's proposals for self-argumentation go a step further.
Consider what happens when a person proposes an argument and then
thinks of a counter-argument. The original argument leads to a
certain conclusion. If this conclusion were a logical consequence, in
the sense of mathematical logic, then no additional considerations
would change the conclusion unless some of the premises were found to
be incorrect. Actually such arguments are almost never logical
deductions but non-monotonic consequences of two kinds.
One kind of non-monotonic conclusion is the default. A
default is represented by a sentence that is taken to be true provided
other sentences being considered don't refute it. An example from
Doyle (1979) is "The meeting is on Wednesday unless there is a reason
why not". A program will use this default to conclude that the
meeting is on Wednesday unless it has a sentence asserting something
incompatible like a conflicting meeting on Wednesday.
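As a hedged illustration (the sentence strings and the conflict test
below are invented for this sketch), such a default amounts to adopting
a conclusion so long as no current belief refutes it:

```python
# Sketch of a default: believe the conclusion unless some recorded
# sentence conflicts with it.  Names are invented for illustration.

def believe_default(conclusion, conflicts_with, beliefs):
    """Adopt the default conclusion unless some current belief refutes it."""
    return not any(conflicts_with(b) for b in beliefs)

def conflicts(belief):
    return belief == "conflicting meeting on Wednesday"

beliefs = set()
assert believe_default("meeting is on Wednesday", conflicts, beliefs)
# learning a new fact withdraws the conclusion: the reasoning is
# non-monotonic, since more premises yield fewer conclusions
beliefs.add("conflicting meeting on Wednesday")
assert not believe_default("meeting is on Wednesday", conflicts, beliefs)
```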
Another kind of non-monotonic consequence occurs when the
facts at a person's or program's disposal show the existence of
certain objects of a given kind, and the person or program concludes
that these are all of the objects of the given kind. Thus we may
know that a boat has a leak and lacks oars, and we may conjecture
(non-monotonically) that these are the only "things" wrong with the
boat.
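A minimal sketch of this closure conjecture, with fact triples invented
for illustration: the faults derivable from the known facts are
conjectured, non-monotonically, to be all the faults there are.

```python
# Sketch of the closure conjecture.  The fact triples are invented
# for illustration.

known_facts = {("fault", "boat", "leak"),
               ("fault", "boat", "no-oars")}

def conjectured_faults(thing, facts):
    """Conjecture (non-monotonically) that the faults derivable from
    the known facts are all of the faults there are."""
    return {obj for (rel, subj, obj) in facts
            if rel == "fault" and subj == thing}

only_faults = conjectured_faults("boat", known_facts)
# discovering a broken rudder enlarges the set, withdrawing the earlier
# conjecture that the leak and the missing oars were everything wrong
known_facts.add(("fault", "boat", "broken-rudder"))
```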
Argument, whether with another person or with oneself, often
involves finding reasons permitting such non-monotonically obtained
conclusions and then finding counter-arguments.  In the above
examples, a counter-argument might involve a non-monotonic deduction
that there is another group in Wednesday's meeting room or that the
boat also has a broken rudder.  Counter-counter-arguments may involve
asserting the existence of another room or a plan for fixing the
rudder.
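This alternation of argument and rebuttal can be sketched as a chain in
which each argument attacks the one before it, and an argument stands
exactly when its attacker does not; the argument texts below are
invented for illustration.

```python
# Sketch of dialectical argumentation as a chain of rebuttals.
# Argument i+1 attacks argument i; an argument stands exactly when
# its attacker does not.  Example text invented for illustration.

def stands(chain, i):
    """Argument i stands iff its attacker (argument i+1), if any, falls."""
    if i + 1 >= len(chain):
        return True      # an unrebutted argument stands
    return not stands(chain, i + 1)

chain = ["the meeting defaults to Wednesday",      # argument
         "another group has the Wednesday room",   # counter-argument
         "a second room is available"]             # counter-counter-argument
# the counter-counter-argument reinstates the original conclusion
```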
We propose to extend the work of our Formal Reasoning Group to
develop theories and write programs that will decide what to do by
reasoning that includes introspection and dialectical argumentation.
This work will be based on ideas in Jon Doyle's (1980) PhD
dissertation and on approaches to non-monotonic reasoning by Doyle and
by McCarthy.
Doyle's thesis investigates the problem of controlling or
directing the reasoning and actions of a computer program. The basic
approach explored is to view reasoning as a species of action, so that
a program might apply its reasoning powers to the task of deciding
what inferences to make as well as deciding what other actions to
take. Doyle proposed a design for the architecture of reasoning
programs. This architecture involves several of the features
mentioned earlier, including self-consciousness, intentional actions,
deliberate adaptations, and a form of decision-making based on
dialectical argumentation.
A program based on this architecture would inspect itself,
describe aspects of itself to itself, and use this self-reference and
these self-descriptions in making decisions and taking actions.  The
program's mental life would include awareness of its own concepts,
beliefs, desires, intentions, inferences, actions, and skills. All of
these are represented by self-descriptions in a single sort of
language, so that the program has access to all of these aspects of
itself, and can reason about them in the same terms.
During the thirteen-month period of this supplementary work, the
studies will mainly be conceptual. This is partly because these ideas
require additional theoretical work before they can be embodied in
programs and partly because programs that keep a trace of their own
reasoning processes may require bigger memories than are currently
available for single processes at Stanford. In any case,
implementation will be a large enough project to require very careful
planning.
By the end of 1981, we will have planned a system that will be
able to reason about its own actions and "thoughts" and carry out
internal arguments as well as simpler forms of non-monotonic
reasoning.
References
Doyle, J. (1979): "A truth maintenance system", Artificial
Intelligence 12, 231-272.
Doyle, J. (1980): "A model for deliberation, action, and
introspection" PhD Thesis, MIT AI Laboratory TR-581
McCarthy, J. (1980): "Circumscription - a form of non-monotonic
reasoning" Stanford Artificial Intelligence Laboratory Memo AIM-334
McDermott, Drew and Jon Doyle (1980): "Non-monotonic logic I",
Artificial Intelligence, to appear
Reiter, Raymond (1980): "A logic for default reasoning", Artificial
Intelligence, to appear
CURRICULUM VITAE
of
Jon Doyle
Upcoming Position:
Research Associate
Artificial Intelligence Laboratory
Stanford University
Present Position:
Scientist
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Address:
305 Memorial Drive, #612-A
Cambridge, Massachusetts 02139
Telephone: (617) 494-9214
Citizenship: United States of America
Principal Fields of Professional Interest:
Intelligence
Theory of Computation
Philosophy
Logic
Mathematics
Physics
Education:
1971-72 South Texas Junior College, Houston, Texas.
1972-74 University of Houston, B.S. in Mathematics, December 1974.
Senior Honors Thesis under Prof. J. A. Schatz on "Computational
Investigations of Non-Repetitive Sequences."
1975-77 M.I.T., S.M. in Electrical Engineering and Computer Science,
June 1977. Thesis under Prof. G. J. Sussman on "Truth Maintenance
Systems for Problem Solving."
1977-80 M.I.T., Ph.D. in Artificial Intelligence, June 1980.
Thesis under Prof. G. J. Sussman on "A Model for Deliberation,
Action, and Introspection." Profs. M. Minsky, P. Szolovits, and
D. McDermott (Yale), readers.
History of Employment:
May 1974 - August 1974 Symbiotics International, Inc., Houston, Texas
Development and maintenance of business accounting programs.
January 1975 - July 1975 Shell Oil Company, Houston, Texas
Development of a user environment for geophysical processing in the
Technical Computing Division.
Awards Received:
1975-1980 Fannie and John Hertz Foundation Graduate Fellowship
1975 NSF Honorable Mention
1974 Summa Cum Laude, University of Houston
1974 Honors in Mathematics, University of Houston
1974 Honors Program, University of Houston
1974 First prize Mathematics Contest, University of Houston
1973 Third prize Mathematics Contest, University of Houston
1973 Outstanding First-year Student of Russian,
University of Houston
Current Organization Memberships:
American Association for the Advancement of Science
American Mathematical Society
Association for Computing Machinery
Boston Museum of Fine Arts
Mathematical Association of America
MIT Musical Theater Guild
Sigma Xi
Sigart, Pi Mu Epsilon, Phi Kappa Phi, Omega Delta Kappa
Past Organization Memberships:
MIT Choral Society (1979)
MIT Symphony Orchestra (1977-78)
Offices:
Director of Univ. of Houston chapter of Pi Mu Epsilon (1974)
MIT Ashdown House Executive Committee (1978-79)
Associate Editor of ACM SIGART Newsletter (1978-79)
Publications:
Papers in Refereed Journals:
1. Jon Doyle and Ronald L. Rivest, "Linear Expected Time of a Simple
Union-Find Algorithm", Information Processing Letters, Volume 5,
Number 5, (November 1976), pp. 146-148.
2. Jon Doyle, "A Truth Maintenance System", Artificial Intelligence 12
(1979), 231-272. Also MIT AI Lab Memo 521, June 1979.
3. Drew McDermott and Jon Doyle, "Non-Monotonic Logic I", to appear in
Artificial Intelligence, 1980. Also MIT AI Lab Memo 468, August
1978. Abstract in Notices of the AMS, V. 26, No. 1 (Jan. 1979),
#79T-E4, p. A-16.
Papers in Other Journals:
1. Jon Doyle and Philip London, "A Selected Descriptor-Indexed
Bibliography to the Literature on Belief Revision", ACM SIGART
Newsletter, No. 71, 7-23, 1980. Also MIT AI Lab Memo 568, February
1980.
2. Jon Doyle "Historical Annotations and Humble Databases", to
appear in ACM SIGMOD Record.
Proceedings of Refereed Conferences:
1. Johan de Kleer, Jon Doyle, Guy L. Steele Jr. and Gerald Jay
Sussman, "AMORD: Explicit Control of Reasoning", Proc. ACM
Conference on AI and Programming Languages, Rochester, New York,
August 1977. Also MIT AI Lab Memo 427 ("Explicit Control of
Reasoning"), June 1977.
2. Jon Doyle, "Truth Maintenance Systems for Problem Solving", Proc.
Fifth International Joint Conference on Artificial Intelligence,
Cambridge, Massachusetts, August 1977. Also MIT AI Lab TR-419,
January 1978.
3. Jon Doyle, "A Glimpse of Truth Maintenance", Proc. Fourth Workshop
on Automated Deduction, Austin, Texas, February 1979. Also Proc.
Sixth International Joint Conference on Artificial Intelligence,
Tokyo, Japan, August 1979. Also MIT AI Lab Memo 461, February
1978.
4. Drew McDermott and Jon Doyle, "Non-Monotonic Logic I (extended
abstract)", Proc. Fourth Workshop on Automated Deduction, Austin,
Texas, February 1979.
5. Drew McDermott and Jon Doyle, "An Introduction to Non-Monotonic
Logic", Proc. Sixth International Joint Conference on Artificial
Intelligence, Tokyo, Japan, August 1979.
Unrefereed Conferences:
1. Jon Doyle, "Non-Repetitive Binary Sequences", 727th Meeting of
the American Mathematical Society, Cambridge, Massachusetts,
October, 1975. Abstract in Notices of the AMS, Oct. 1975, p. A-
660, #727-A5. Referee recommended but scooped for J. Combinatorial
Theory A. My theorems 1 and 2 paraphrase theorems 1 and 2 of F. M.
Dekking, "On Repetitions of Blocks in Binary Sequences," J. C. T. A
20 #3 (May 1976) 292-299.
Articles in Books:
1. Johan de Kleer, Jon Doyle, Guy L. Steele Jr. and Gerald Jay
Sussman, "Explicit Control of Reasoning", Artificial Intelligence:
An MIT Perspective, P. H. Winston and R. H. Brown, editors, MIT
Press series in Artificial Intelligence, Cambridge, 1979.
2. Jon Doyle, "A Glimpse of Truth Maintenance", Artificial
Intelligence: An MIT Perspective, P. H. Winston and R. H. Brown,
editors, MIT Press series in Artificial Intelligence, Cambridge,
1979.
Internal Reports:
1. Jon Doyle, "Analysis by Propagation of Constraints in Elementary
Geometry Problem Solving", MIT AI Lab WP-108, June 1976.
2. Jon Doyle, "The Use of Dependency Relationships in the Control of
Reasoning", MIT AI Lab WP-133, November 1976.
3. Jon Doyle, "Hierarchy in Knowledge Representations", MIT AI Lab
WP-159, November 1977.
4. Johan de Kleer, Jon Doyle, Charles Rich, Guy L. Steele Jr., and
Gerald Jay Sussman, "AMORD: A Deductive Procedure System", MIT AI
Lab Memo 435, January 1978.
Invited Lectures:
1. "What if the Mayans had Escalators", Yale University, Computer
Science Department, June 23, 1978.
2. "Reflexive Interpreters: A Model for Deliberation, Action, and
Introspection", University of Pennsylvania, Computer Science
Department, October 30, 1979.
3. "Reasoned Deliberation and Decision-Making", SRI International,
January 14, 1980.
4. "Reasoned Deliberation and Decision-Making", Hewlett-Packard, Inc.,
January 15, 1980.
5. "Reasoned Deliberation and Decision-Making", Stanford University,
Computer Science Department, January 16, 1980.
6. "Reasoned Deliberation and Decision-Making", Xerox Palo Alto
Research Center, January 17, 1980.
7. "Reasoned Deliberation and Decision-Making", USC Information
Sciences Institute, January 24, 1980.
8. "Reasoned Deliberation and Decision-Making", Bell Laboratories,
February 14, 1980.
Personal Background and Interests:
Born and raised in Houston, Texas and, in summers, Dundee, Wisconsin,
by Leo M. Doyle and Marilyn C. Doyle. Unmarried. Interests in
people, writing, literature, history, economics, musical and
poetical composition, conducting, viola, recorders, painting,
sculpture, technology, swimming and other sports.
Basic AI & Formal Reasoning
Proposed budget addition to Contract MDA 903-80-C-0102 (DARPA Order
No. 2494) for the period 1 October 1980 through 31 October 1981.
Professional
Doyle, Jon 27,952
----------
Salary total 27,952
Staff benefits (21.15% of salaries) 5,912
Travel 1,173
Computer cost for 1 person(s) 2,002
Other direct expenses 2,236
----------
Total direct costs 39,275
Indirect costs (58% of above) 22,780
----------
Project total 62,055